13 research outputs found

    Steerable filters generated with the hypercomplex dual-tree wavelet transform

    The use of wavelets in the image processing domain is still in its infancy, and largely associated with image compression. With the advent of the dual-tree hypercomplex wavelet transform (DHWT) and its improved shift invariance and directional selectivity, applications in other areas of image processing are more conceivable. This paper discusses the problems and solutions in developing the DHWT and its inverse. It also offers a practical implementation of the algorithms involved. The aim of this work is to apply the DHWT in machine vision. Tentative work on a possible new way of feature extraction is presented. The paper shows that 2-D hypercomplex basis wavelets can be used to generate steerable filters which allow rotation as well as translation.
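
    The paper builds its steerable filters from 2-D hypercomplex basis wavelets. As a minimal sketch of the steerability idea itself, the code below steers a first-derivative-of-Gaussian filter pair (the classic two-filter construction, not the DHWT basis; all names and parameters are illustrative):

    ```python
    import numpy as np

    def gaussian_derivative_basis(size=15, sigma=2.0):
        # Two basis filters: the x- and y-derivatives of a 2-D Gaussian.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return -xx / sigma ** 2 * g, -yy / sigma ** 2 * g

    def steer(gx, gy, theta):
        # Steerability: the filter at any orientation theta is a linear
        # combination of the fixed basis filters.
        return np.cos(theta) * gx + np.sin(theta) * gy

    gx, gy = gaussian_derivative_basis()
    g45 = steer(gx, gy, np.pi / 4)  # oriented filter at 45 degrees
    ```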

    A machine vision extension for the Ruby programming language

    Dynamically typed scripting languages have become popular in recent years. Although interpreted languages allow for substantial reduction of software development time, they are often rejected due to performance concerns. In this paper we present an extension for the programming language Ruby, called HornetsEye, which facilitates the development of real-time machine vision algorithms within Ruby. Apart from providing integration of crucial libraries for input and output, HornetsEye provides fast native implementations (compiled code) for a generic set of array operators. Different array operators were compared with equivalent implementations in C++. Not only was it possible to achieve comparable real-time performance, but also to exceed the efficiency of the C++ implementation in several cases. Implementations of several algorithms are given to demonstrate how the array operators can be used to create concise implementations.
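
    HornetsEye itself is a Ruby extension, so the sketch below is only a NumPy analogue of the benchmark's point: a generic array operator backed by compiled code can beat an interpreted per-pixel loop by a wide margin. The threshold operator and image size are illustrative assumptions, not HornetsEye's API:

    ```python
    import time
    import numpy as np

    def threshold_loop(img, t):
        # Per-pixel interpreted loop: what a pure scripting-language
        # implementation does without compiled array operators.
        out = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = 255 if img[i, j] > t else 0
        return out

    def threshold_vec(img, t):
        # Generic element-wise array operator backed by compiled code.
        return np.where(img > t, 255, 0).astype(img.dtype)

    img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    for fn in (threshold_loop, threshold_vec):
        start = time.perf_counter()
        fn(img, 128)
        print(fn.__name__, time.perf_counter() - start)
    ```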

    Towards real-time object recognition using pairs of lines

    No full text
    This paper presents a description of the "Pairs of Lines" object recognition algorithm used in the MIMAS Computer Vision toolkit. The toolkit was developed at Sheffield Hallam University and the "Pairs of Lines" method was used in a recent European Union-funded project.(1) The algorithm was developed to enable a micro-robot system (Amavasai et al., InstMC Journal of Measurement and Control) to recognize geometric planar objects in real time, in a noisy environment. The method uses straight line segments extracted from both the known object models and the visual scene in which the objects are to be located. Pairs of these straight lines are then compared: if there is a geometric match between two pairs, an estimate of the possible position, orientation and scale of the model in the scene is made. The estimates are collated as all possible pairs of lines are compared, and the process yields the position, orientation and scale of the known models in the scene. The algorithm has been optimized for speed. This paper describes the method in detail and presents experimental results indicating that the technique is robust to camera noise and partial occlusion, and produces recognition in under 1 s on a desktop PC. Recognition times are shown to be 2 to 16 times faster than with the well-studied pairwise geometric histograms method. Recognition rates of up to 80% were achieved on scenes with signal-to-noise ratios of 2.5.
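
    As a rough sketch of the voting scheme described above, assuming segments as point pairs and entirely hypothetical helper names (the MIMAS implementation will differ in its invariants and pose parametrisation):

    ```python
    import numpy as np
    from itertools import combinations
    from collections import defaultdict

    # A segment is ((x1, y1), (x2, y2)); all helper names are hypothetical.

    def seg_vec(seg):
        (x1, y1), (x2, y2) = seg
        return np.array([x2 - x1, y2 - y1], dtype=float)

    def pair_signature(sa, sb):
        # Rotation- and scale-invariant description of a pair of segments:
        # the angle between them and the ratio of their lengths.
        va, vb = seg_vec(sa), seg_vec(sb)
        cross = va[0] * vb[1] - va[1] * vb[0]
        ang = np.arctan2(cross, va @ vb)
        ratio = np.linalg.norm(vb) / (np.linalg.norm(va) + 1e-9)
        return ang, ratio

    def match_pairs(model_segs, scene_segs, ang_tol=0.05, ratio_tol=0.1):
        # Compare every model pair against every scene pair; geometrically
        # consistent pairs vote for a coarse (rotation, scale) hypothesis,
        # and the strongest accumulated estimate wins.
        votes = defaultdict(int)
        model_pairs = [(p, pair_signature(*p))
                       for p in combinations(model_segs, 2)]
        for q in combinations(scene_segs, 2):
            ang_q, ratio_q = pair_signature(*q)
            for (ma, _mb), (ang_m, ratio_m) in model_pairs:
                if (abs(ang_q - ang_m) < ang_tol
                        and abs(ratio_q - ratio_m) < ratio_tol):
                    vm, vq = seg_vec(ma), seg_vec(q[0])
                    rot = np.arctan2(vq[1], vq[0]) - np.arctan2(vm[1], vm[0])
                    scale = np.linalg.norm(vq) / (np.linalg.norm(vm) + 1e-9)
                    votes[(round(rot, 1), round(scale, 1))] += 1
        return max(votes.items(), key=lambda kv: kv[1]) if votes else None
    ```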

    An analysis of collective movement models for robotic swarms

    No full text
    A swarm is defined as a set of two or more independent homogeneous or heterogeneous agents that act coherently in a common environment and generate emergent behavior. The creation of artificial or robotic swarms has attracted many researchers over the last two decades. Many studies have taken practical approaches to swarm construction, investigating swarm navigation, task allocation and elementary construction. This paper examines the aggregations that emerge from three different movement models of relatively simple agents; the agents differ only in their maximum turning angle and their sensing range.
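
    A minimal sketch of how the two varied parameters, maximum turning angle and sensing range, can enter a simple aggregation model (the update rule here is an illustrative assumption, not one of the paper's three models):

    ```python
    import numpy as np

    def step(pos, heading, sense_range, max_turn, speed=1.0):
        # One update of a minimal aggregation model: each agent turns
        # toward the centroid of the neighbours within its sensing range,
        # with the turn clipped to its maximum turning angle.
        # pos is an (N, 2) array, heading an (N,) array of radians.
        new_heading = heading.copy()
        for i in range(len(pos)):
            d = np.linalg.norm(pos - pos[i], axis=1)
            near = (d > 0) & (d < sense_range)
            if near.any():
                target = pos[near].mean(axis=0) - pos[i]
                desired = np.arctan2(target[1], target[0])
                turn = (desired - heading[i] + np.pi) % (2 * np.pi) - np.pi
                new_heading[i] += np.clip(turn, -max_turn, max_turn)
        step_vec = np.stack([np.cos(new_heading), np.sin(new_heading)], axis=1)
        return pos + speed * step_vec, new_heading
    ```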

    Development of a desktop freehand 3-D surface reconstruction system

    No full text
    This paper discusses the development of a freehand 3-D surface reconstruction system. The system was constructed from readily available off-the-shelf components, namely a laser line emitter and a webcam, and allows the user to sweep the laser line by hand across the object to be scanned. The 3-D surface information of the object is captured as follows. A series of digital images of the laser line, generated by the intersection of the laser plane with the surface of the object and with a planar background object, was captured and stored on a PC, and the points on the laser line were extracted. The 2-D laser points found on the surface of the planar object were projected into 3-D space using a pinhole camera model, and the laser plane was calibrated. Using the 2-D laser points found on the surface of the 3-D object, a cloud of 3-D points representing the surface of the object being scanned was then generated by triangulation. Two different methods were implemented for the laser plane calibration and their performance was compared.
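
    The triangulation step reduces to intersecting a back-projected pixel ray with the calibrated laser plane. A minimal sketch under a pinhole model, with illustrative intrinsics and plane parameters:

    ```python
    import numpy as np

    def triangulate(pixel, K, plane):
        # Intersect the camera ray through `pixel` with the calibrated
        # laser plane. K is the 3x3 intrinsic matrix; `plane` is (n, d)
        # for the plane n . X = d, both in the camera frame.
        n, d = plane
        ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
        t = d / (n @ ray)        # camera centre at origin: X = t * ray
        return t * ray           # 3-D point on the scanned surface

    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    plane = (np.array([0.0, -0.5, 1.0]), 0.8)   # illustrative values
    print(triangulate((350, 200), K, plane))
    ```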

    The role of sensory-motor coordination: identifying environmental motion dynamics with dynamic neural networks

    No full text
    We describe three recurrent neural architectures inspired by the proprioceptive system found in mammals: Exo-sensing, Ego-sensing, and Composite. Through the use of Particle Swarm Optimisation, the robot controllers are adapted to perform the task of identifying motion dynamics within their environment. We highlight the effect of sensory-motor coordination on task performance for each of the three neural architectures.
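
    A minimal sketch of the Particle Swarm Optimisation loop used to adapt controller weights (the fitness function, bounds and coefficients here are illustrative assumptions; the paper's task-specific fitness is not reproduced):

    ```python
    import numpy as np

    def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
        # Each particle is a candidate weight vector for a controller.
        # Velocities blend inertia, attraction to the particle's own best,
        # and attraction to the swarm-wide best (maximising fitness).
        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.array([fitness(p) for p in x])
        gbest = pbest[pbest_f.argmax()]
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            f = np.array([fitness(p) for p in x])
            improved = f > pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmax()]
        return gbest

    best = pso(lambda wvec: -np.sum(wvec ** 2), dim=8)  # toy fitness
    ```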

    A dissimilarity visualisation system for CT: Pilot study

    No full text
    One of the capabilities of the human vision process is the ability to visualise images at different levels of detail. A segmentation procedure has been developed to mimic this capability. The developed hierarchical clustering based segmentation (HCS) procedure automatically generates a hierarchy of segmented images. The hierarchy represents the continuous merging of similar regions, whether spatially adjacent or disjoint, as the allowable threshold of dissimilarity between regions is gradually increased. By its very nature the HCS procedure produces a large amount of visual information, so a graphical user interface (GUI) was designed to present the segmentation output in an informative way for the user to view and interpret. In addition, the GUI displays the original image data by optimally mapping the range of data values to the available 256 gray-level values. The purpose of this paper is to describe the development of the image visualisation system and to demonstrate some of its functionalities.
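
    A minimal sketch of the HCS merging loop as described: regions merge in order of increasing dissimilarity until the current threshold is exceeded, and the surviving regions at each threshold form one level of the hierarchy. The region representation (pixel sets) and the dissimilarity function are assumptions:

    ```python
    def hcs(regions, dissimilarity, thresholds):
        # `regions` is a list of pixel sets; `dissimilarity` is any
        # user-supplied function over two regions (both assumptions).
        hierarchy = []
        for t in sorted(thresholds):
            while len(regions) > 1:
                # Find the most similar pair of regions.
                d, i, j = min((dissimilarity(regions[i], regions[j]), i, j)
                              for i in range(len(regions))
                              for j in range(i + 1, len(regions)))
                if d > t:
                    break  # nothing similar enough at this level
                merged = regions[i] | regions[j]  # union of pixel sets
                regions = [r for k, r in enumerate(regions)
                           if k not in (i, j)]
                regions.append(merged)
            hierarchy.append(list(regions))  # segmentation at threshold t
        return hierarchy

    regions = [{(0, 0)}, {(0, 1)}, {(5, 5)}]  # toy pixel-set regions
    dis = lambda a, b: min(abs(p[0] - q[0]) + abs(p[1] - q[1])
                           for p in a for q in b)
    levels = hcs(regions, dis, thresholds=[1, 10])
    ```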

    Computer vision methods for optical microscopes

    No full text
    As the fields of micro- and nano-technology mature, there will be an increased need to build tools that are able to work at these scales. Industry will require solutions for assembling and manipulating components, much as it has done in the macro range. With this need in mind, a new set of challenges requiring novel solutions has to be met. One of them is the ability to provide closed-loop feedback control for manipulators, and we foresee that machine vision will play a leading role in this area. This paper introduces a technique for integrating machine vision into the field of micro-technology, comprising two methods: one for tracking and one for depth reconstruction under an optical microscope.
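
    The abstract does not detail the tracking method itself; as a generic stand-in, the sketch below uses normalised cross-correlation template matching, one common way to supply the positional feedback mentioned (not necessarily the paper's approach):

    ```python
    import numpy as np

    def ncc_track(frame, template):
        # Exhaustive normalised cross-correlation: return the (row, col)
        # of the best template match in the frame. Slow but self-contained.
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-9)
        best, best_pos = -np.inf, (0, 0)
        for r in range(frame.shape[0] - th + 1):
            for c in range(frame.shape[1] - tw + 1):
                w = frame[r:r + th, c:c + tw]
                score = np.mean(t * (w - w.mean()) / (w.std() + 1e-9))
                if score > best:
                    best, best_pos = score, (r, c)
        return best_pos

    frame = np.random.rand(120, 160)
    template = frame[40:56, 60:76].copy()
    print(ncc_track(frame, template))   # -> (40, 60)
    ```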